Mean squared prediction error

In statistics, the '''mean squared prediction error''' ('''MSPE''') of a smoothing or curve fitting procedure is the expected value of the sum of squared differences between the fitted values \widehat{g}(x_i) and the values of the (unobservable) function ''g''. If the smoothing procedure has operator matrix ''L'', then
:\operatorname{MSPE}(L)=\operatorname{E}\left[\sum_{i=1}^n\left(g(x_i)-\widehat{g}(x_i)\right)^2\right].
The MSPE can be decomposed into two terms, just as the mean squared error is decomposed into bias and variance; for the MSPE, however, one term is the sum of squared biases of the fitted values and the other is the sum of variances of the fitted values:
:\operatorname{MSPE}(L)=\sum_{i=1}^n\left(\operatorname{E}\left[\widehat{g}(x_i)\right]-g(x_i)\right)^2+\sum_{i=1}^n\operatorname{var}\left[\widehat{g}(x_i)\right].
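As a minimal illustration (with arbitrary choices of a 5-point running-mean smoother for ''L'', a test function g(x)=\sin(2\pi x) and \sigma=0.3, none of which come from the definition above), this decomposition can be checked by Monte Carlo simulation:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 50, 0.3, 20000
x = np.linspace(0.0, 1.0, n)
g = np.sin(2.0 * np.pi * x)

# Operator matrix L of a 5-point running-mean smoother (an arbitrary choice).
L = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 2), min(n, i + 3)
    L[i, lo:hi] = 1.0 / (hi - lo)

# Simulate many data sets y = g + sigma*eps and fit each one: ghat = L y.
y = g + sigma * rng.standard_normal((reps, n))
ghat = y @ L.T                                      # row r holds L @ y[r]

# Left-hand side: Monte Carlo estimate of E[ sum_i (g(x_i) - ghat(x_i))^2 ].
mspe_mc = np.mean(np.sum((g - ghat) ** 2, axis=1))

# Right-hand side: sum of squared biases plus sum of variances of the fits.
bias_sq = np.sum((ghat.mean(axis=0) - g) ** 2)
var_sum = np.sum(ghat.var(axis=0))

print(mspe_mc, bias_sq + var_sum)                   # agree up to Monte Carlo error
</syntaxhighlight>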
Note that knowledge of ''g'' is required in order to calculate MSPE exactly.
==Estimation of MSPE==
For the model y_i=g(x_i)+\sigma\varepsilon_i, where \varepsilon_i\sim\mathcal{N}(0,1), one may write
:\operatorname{MSPE}(L)=g'(I-L)'(I-L)g+\sigma^2\operatorname{tr}\left[L'L\right].
The first term is equivalent to
:\sum_{i=1}^n\left(\operatorname{E}\left[\widehat{g}(x_i)\right]-g(x_i)\right)^2=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\operatorname{tr}\left[\left(I-L\right)'\left(I-L\right)\right].
Thus,
:\operatorname{MSPE}(L)=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\left(n-2\operatorname{tr}\left[L\right]\right).
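The same hypothetical setup as in the sketch above can be reused to check this identity numerically: averaging \sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2 over simulated samples and subtracting \sigma^2\left(n-2\operatorname{tr}\left[L\right]\right) reproduces the closed-form value of \operatorname{MSPE}(L). This is a sketch under those arbitrary choices, not a proof:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 50, 0.3, 20000
x = np.linspace(0.0, 1.0, n)
g = np.sin(2.0 * np.pi * x)

L = np.zeros((n, n))                     # same 5-point running-mean smoother as before
for i in range(n):
    lo, hi = max(0, i - 2), min(n, i + 3)
    L[i, lo:hi] = 1.0 / (hi - lo)

I = np.eye(n)
# Closed-form MSPE(L) = g'(I-L)'(I-L)g + sigma^2 tr(L'L).
mspe_closed = g @ (I - L).T @ (I - L) @ g + sigma ** 2 * np.trace(L.T @ L)

# Monte Carlo estimate of E[ sum_i (y_i - ghat(x_i))^2 ] - sigma^2 (n - 2 tr L).
y = g + sigma * rng.standard_normal((reps, n))
rss = np.sum((y - y @ L.T) ** 2, axis=1)            # residual sum of squares per sample
mspe_from_rss = rss.mean() - sigma ** 2 * (n - 2.0 * np.trace(L))

print(mspe_closed, mspe_from_rss)                   # agree up to Monte Carlo error
</syntaxhighlight>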
If \sigma^2 is known or well-estimated by \widehat{\sigma}^2, it becomes possible to estimate the MSPE by
:\operatorname{\widehat{MSPE}}(L)=\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2-\widehat{\sigma}^2\left(n-2\operatorname{tr}\left[L\right]\right).
Colin Mallows advocated this method in the construction of his model selection statistic ''Cp'', which is a normalized version of the estimated MSPE:
:C_p=\frac{\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2}{\widehat{\sigma}^2}-n+2\operatorname{tr}\left[L\right],
where ''p'' comes from the fact that the number of parameters ''p'' estimated for a parametric smoother is given by p=\operatorname{tr}\left[L\right], and ''C'' is in honor of Cuthbert Daniel.
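As a sketch of how the estimated MSPE and ''Cp'' might be used for model selection (again with a hypothetical family of running-mean smoothers and \sigma^2 treated as known), one can compute both statistics for several smoother bandwidths from a single simulated sample and prefer the bandwidth with the smallest ''Cp'':
<syntaxhighlight lang="python">
import numpy as np

def running_mean_operator(n, half_width):
    """Operator matrix L of a (2*half_width + 1)-point running-mean smoother."""
    L = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        L[i, lo:hi] = 1.0 / (hi - lo)
    return L

rng = np.random.default_rng(2)
n, sigma = 100, 0.3
x = np.linspace(0.0, 1.0, n)
g = np.sin(2.0 * np.pi * x)                # unknown in practice; used here only to simulate y
y = g + sigma * rng.standard_normal(n)     # a single observed sample

for h in (1, 2, 4, 8, 16):
    L = running_mean_operator(n, h)
    rss = np.sum((y - L @ y) ** 2)
    tr_L = np.trace(L)
    mspe_hat = rss - sigma ** 2 * (n - 2.0 * tr_L)      # estimated MSPE
    c_p = rss / sigma ** 2 - n + 2.0 * tr_L             # Mallows' Cp
    print(f"half-width {h:2d}: MSPE-hat = {mspe_hat:8.3f}, Cp = {c_p:8.3f}")
</syntaxhighlight>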
